MODAL - 2018
Overall Objectives
Application Domains
New Results
Bibliography


Section: New Results

Axis 2: Simpler PAC-Bayesian bounds for hostile data

Participant : Benjamin Guedj.

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution ρ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution π. Unfortunately, most of the available bounds rely on heavy assumptions such as boundedness and independence of the observations. This work aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár's f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
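Schematically, a bound of this flavor controls the gap between the aggregated true risk R and empirical risk r via an f-divergence in place of the usual KL term. The sketch below uses simplified notation (conjugate exponents p, q and a moment term M_q under the prior are assumptions of this illustration; see the published paper for the exact statement and constants):

\[
\left| \int R \,\mathrm{d}\rho - \int r \,\mathrm{d}\rho \right|
\;\le\;
\left( \frac{\mathcal{M}_q}{\delta} \right)^{1/q}
\bigl( D_{f}(\rho, \pi) + 1 \bigr)^{1/p},
\qquad \frac{1}{p} + \frac{1}{q} = 1,
\]
\[
\text{where } D_{f}(\rho, \pi) = \int f\!\left( \frac{\mathrm{d}\rho}{\mathrm{d}\pi} \right) \mathrm{d}\pi
\text{ with } f(x) = x^{p} - 1,
\quad
\mathcal{M}_q = \int \mathbb{E}\,\bigl| r(\theta) - R(\theta) \bigr|^{q} \,\mathrm{d}\pi(\theta),
\]

and the bound holds with probability at least 1 − δ. Note that boundedness of the loss is replaced by the weaker requirement that the q-th moment M_q be finite, which is what accommodates heavy-tailed and dependent observations.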

Joint work with Pierre Alquier (ENSAE ParisTech). Paper published in Machine Learning: [11].